6.3 Accuracy, Meaning, and Effect


Part of the difficulty is that the function (i.e., biological meaning) is not so conveniently quantifiable as the information content of the sequence encoding it. Even considering the simpler problem of structure alone, there are various approaches yielding very different answers. Suppose that a certain protein has a unique structure [most nonstructural proteins have, of course, several (at least two) structures in order to function; the best-known example is probably haemoglobin]. This structure could be specified by the coördinates of all the constituent atoms, or by the dihedral angles of each amino acid, listed in order of the sequence and at a given resolution [Dewey (1996, 1997) calls this the algorithmic complexity of a protein; cf. K in Eq. (6.13)]. If, however, protein structures come from a finite number of basic types, it suffices to specify one of these types, which moves the problem back into one dealing with Shannon-type information.
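The contrast between the two specifications can be made concrete with a back-of-the-envelope calculation. The following sketch uses hypothetical numbers (protein length, angular resolution, and fold-library size are illustrative assumptions, not figures from the text):

```python
import math

# Hypothetical numbers for illustration only.
n_residues = 150          # length of the protein chain
angles_per_residue = 2    # phi and psi backbone dihedral angles
resolution_bits = 7       # ~3 degree resolution: 360/2**7 ≈ 2.8 degrees

# "Algorithmic" specification: list every dihedral angle in sequence order.
bits_algorithmic = n_residues * angles_per_residue * resolution_bits

# "Shannon-type" specification: name one fold out of a finite library of
# basic structural types (the library size is an assumed round figure).
n_fold_types = 1400
bits_fold_index = math.ceil(math.log2(n_fold_types))

print(bits_algorithmic)   # 2100 bits
print(bits_fold_index)    # 11 bits
```

Even with generous assumptions, picking a type from a finite repertoire needs orders of magnitude fewer bits than listing the conformation angle by angle.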

In the case of function, a useful starting point could be to consider the immune system, in which the main criterion of function is the affinity of the antibody (or, more precisely, the affinity of a small region of the antibody) for the target antigen. Affinity, and how affinities can lead to networks of interactions, will be dealt with in Chap. 23.

The problem of assigning meaning to a sign, or to a message (a collection of signs), is usually referred to as the semantic problem. Semantic information cannot be interpreted solely at the syntactical level. Just as a set of antibodies can be ranked in order of affinity, so may a series of statements be ranked in order of semantic precision; for example, consider the statements:

    A train will leave.
    A train will leave London today.
    An express train will leave London Marylebone for Glasgow St Enoch at 10:20 a.m. today.

and so on. Postal or e-mail addresses have a similar kind of syntactical hierarchy. Although we are not yet able to assign numerical values to meanings, we can at least order them.

Carnap and Bar-Hillel have framed a theory, rooted in Carnap's theory of inductive probability, attempting to do for semantics what Shannon did for the technical content of a message. It deals with the semantic content of declarative sentences, excluding the pragmatic aspects (dealing with the consequences or value of received information for the recipient). It does not deal with the so-called semantic problem of communication, which is concerned with the identity (or approach thereto) between the intended meaning of the sender and the interpretation of meaning by the receiver: Carnap and Bar-Hillel place this explicit involvement of sender and receiver in the realm of pragmatics.
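A concrete sketch of their programme can be given on a toy model (the world set, the sentences, and the uniform logical measure are all assumptions for illustration, not from the text): a sentence is modelled as the set of state-descriptions it admits, and its content is the logical measure of the state-descriptions it excludes:

```python
import math
from fractions import Fraction

# Eight equiprobable "possible worlds" (state-descriptions) — an assumption.
WORLDS = frozenset(range(8))

def m(s):
    """Logical measure of sentence s: the fraction of worlds it admits."""
    return Fraction(len(s), len(WORLDS))

def content(s):
    """Semantic content: the measure of the worlds the sentence excludes."""
    return m(WORLDS - s)

def information(s):
    """information(i) = -log2 content(NOT i), i.e. -log2 m(i)."""
    return -math.log2(content(WORLDS - s))

def conditional_content(j, i):
    """content(j|i) = content(i & j) - content(i)."""
    return content(i & j) - content(i)

i = frozenset({0, 1, 2, 3})   # a sentence admitting half the worlds
j = frozenset({0, 1, 4, 5})

print(content(i))              # 1/2
print(information(i))          # 1.0 (one bit)
print(conditional_content(j, i))  # 1/4
```

On this toy measure, a sentence admitting half of the worlds excludes the other half, so its content is 1/2 and it carries exactly one bit of information; a tautology (admitting every world) has content 0 and carries no information.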

To gain a flavour of their approach, note that the semantic content of sentence j, conditional on having heard sentence i, is content(j|i) = content(i & j) − content(i), and their measure of information is defined as information(i) = −log₂ content(NOT i). They consider semantic noise (resulting in misinterpretation of a message, even though all of its individual elements have been perfectly received) and semantic efficiency, which takes experience into account; for exam-